Fairness Transferability Subject to Bounded Distribution Shift

Neural Information Processing Systems

Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?


Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial

Neural Information Processing Systems

In this paper, we answer the question of when inserting label noise (less informative labels) can instead yield more accurate and fair models. We are primarily inspired by three observations: 1) in contrast to reducing label noise rates, increasing the noise rates is easy to implement; 2) increasing a certain class of instances' label noise to balance the noise rates (increasing-to-balancing) results in an easier learning problem; 3) increasing-to-balancing improves fairness guarantees against label bias. We first quantify the trade-offs introduced by increasing a certain group of instances' label noise rate, in terms of the loss of label informativeness and the lowered learning difficulty. We analytically demonstrate when such an increase is beneficial, either for improved generalization power or for fairness guarantees. We then present a method to insert label noise properly for the task of learning with noisy labels, with or without a fairness constraint. The primary technical challenge is that we do not know which data instances suffer from higher noise, and we do not have ground-truth labels to verify any possible hypothesis. We propose a detection method that identifies which group of labels might suffer from higher noise without using ground-truth labels. We formally establish the effectiveness of the proposed solution and demonstrate it with extensive experiments.
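The "increasing-to-balancing" idea in this abstract can be illustrated with a minimal sketch. Assuming binary labels, two groups with symmetric noise, and known (estimated) group noise rates e0 < e1 < 0.5 — all assumptions, not the paper's actual procedure — flipping an observed label with probability p raises the effective noise rate from e0 to e0·(1−p) + (1−e0)·p, and solving for equality with e1 gives p = (e1 − e0) / (1 − 2·e0):

```python
import numpy as np

def balance_noise_rates(labels, groups, e0, e1, rng=None):
    """Flip labels in the lower-noise group (group 0) so both groups'
    effective noise rates match (the 'increasing-to-balancing' idea).

    Assumes binary labels in {0, 1}, symmetric noise, and known
    noise rates e0 < e1 < 0.5 for groups 0 and 1.  Flipping an
    observed label with probability p moves the effective rate from
    e0 to e0*(1-p) + (1-e0)*p; equating with e1 gives the p below.
    """
    rng = np.random.default_rng(rng)
    p = (e1 - e0) / (1.0 - 2.0 * e0)
    labels = labels.copy()
    # flip only group-0 labels, each independently with probability p
    mask = (groups == 0) & (rng.random(len(labels)) < p)
    labels[mask] = 1 - labels[mask]
    return labels, p
```

With e0 = 0.1 and e1 = 0.3 this flips each group-0 label with probability 0.25, after which both groups carry an effective noise rate of 0.3.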


Proportional Fairness in Non-Centroid Clustering

Neural Information Processing Systems

We revisit the recently developed framework of proportionally fair clustering, where the goal is to provide group fairness guarantees that become stronger for groups of data points that are large and cohesive. Prior work applies this framework to centroid-based clustering, where points are partitioned into clusters, and the cost to each data point is measured by its distance to a centroid assigned to its cluster. However, real-life applications often do not require such centroids. We extend the theory of proportionally fair clustering to non-centroid clustering by considering a variety of cost functions, both metric and non-metric, for a data point to be placed in a cluster with other data points. Our results indicate that Greedy Capture, a clustering algorithm developed for centroid clustering, continues to provide strong proportional fairness guarantees for non-centroid clustering, although the guarantees are significantly different and establishing them requires novel proof ideas. We also design algorithms for auditing proportional fairness of a given clustering solution. We conduct experiments on real data which suggest that traditional clustering algorithms are highly unfair, while our algorithms achieve strong fairness guarantees with a moderate loss in common clustering objectives.
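The Greedy Capture rule described above can be sketched in a few lines. In the non-centroid variant, balls grow around every data point at a uniform rate and the first ball to contain ⌈n/k⌉ unassigned points captures them as a cluster; the discrete simulation below (function name and the nearest-cluster fallback for leftover points are my assumptions, not the paper's exact algorithm) repeatedly picks the point whose ⌈n/k⌉-th nearest unassigned neighbor is closest:

```python
import math
import numpy as np

def greedy_capture(points, k):
    """Hedged sketch of Greedy Capture for non-centroid clustering.

    Simulates uniformly growing balls around every point: the point
    whose ceil(n/k)-th nearest unassigned neighbour is closest is the
    first whose ball 'captures' a full cluster.  Leftover points are
    attached to their nearest existing cluster.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    size = math.ceil(n / k)
    unassigned = set(range(n))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    clusters = []
    while len(unassigned) >= size:
        idx = sorted(unassigned)
        best = None
        for i in idx:
            d = sorted(dist[i][j] for j in idx)
            r = d[size - 1]  # radius at which i's ball holds `size` points
            if best is None or r < best[0]:
                captured = sorted(idx, key=lambda j: dist[i][j])[:size]
                best = (r, captured)
        clusters.append(best[1])
        unassigned -= set(best[1])
    for j in sorted(unassigned):  # attach stragglers to nearest cluster
        c = min(clusters, key=lambda c: min(dist[j][m] for m in c))
        c.append(j)
    return clusters
```

On two well-separated groups of three 1-D points with k = 2, the sketch recovers the two natural clusters.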


Online Multi-Class Selection with Group Fairness Guarantee

Zargari, Faraz, Nekouyan, Hossein, Hallett, Lyndon, Sun, Bo, Tan, Xiaoqi

arXiv.org Artificial Intelligence

We study the online multi-class selection problem with group fairness guarantees, where limited resources must be allocated to sequentially arriving agents. Our work addresses two key limitations in the existing literature. First, we introduce a novel lossless rounding scheme that ensures the integral algorithm achieves the same expected performance as any fractional solution. Second, we explicitly address the challenges introduced by agents who belong to multiple classes. To this end, we develop a randomized algorithm based on a relax-and-round framework. The algorithm first computes a fractional solution using a resource reservation approach -- referred to as the set-aside mechanism -- to enforce fairness across classes. The subsequent rounding step preserves these fairness guarantees without degrading performance. Additionally, we propose a learning-augmented variant that incorporates untrusted machine-learned predictions to better balance fairness and efficiency in practical settings.
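The set-aside mechanism mentioned above can be illustrated with a toy integral version (the function name, even per-class split, and first-come shared pool are all my assumptions; the paper's actual algorithm is fractional with a lossless rounding step): a reserve fraction of capacity is split across classes and usable only by that class, while the remainder is a shared pool.

```python
def allocate_with_set_aside(arrivals, capacity, class_ids, reserve_frac):
    """Toy sketch of the set-aside idea: a reserve_frac share of the
    capacity is split evenly across classes and can only be used by
    agents of that class; the rest is a shared first-come pool.
    Illustration of the reservation principle only, not the paper's
    relax-and-round algorithm.
    """
    per_class = int(reserve_frac * capacity) // len(class_ids)
    reserved = {c: per_class for c in class_ids}
    shared = capacity - per_class * len(class_ids)
    accepted = []
    for agent, cls in arrivals:  # agents arrive online as (id, class)
        if shared > 0:
            shared -= 1
            accepted.append(agent)
        elif reserved.get(cls, 0) > 0:
            reserved[cls] -= 1
            accepted.append(agent)
    return accepted
```

With capacity 4, two classes, and a 0.5 reserve, three early class-0 arrivals cannot crowd out a later class-1 arrival: one unit stays set aside for class 1.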


Derive a closed-form expression of the minimizer of the squared risk under the Demographic Parity (DP) constraint

Neural Information Processing Systems

We thank all reviewers for their valuable comments. Let us first provide a concise recap of our contributions. We propose an efficient post-processing algorithm (Alg. 1) which can be applied on top of any We highlight that contributions i) and iii) are, to our knowledge, unique. We now address specific points raised by the reviewers, which will be included in the final version upon acceptance. Let us highlight two key differences between "Wasserstein Fair This is an interesting direction of future research. The main difficulty in such an extension for, e.g., Equalized Odds is due to the conditioning on the Notice also that DP is used in several papers, including Jiang et al. discussed above. Let us disagree that the notion of DP is naive. As stated in the paper (ll. However, a form of this assumption is rather classical in non-parametric statistics (see, e.g., "Fast learning In our setting, As. 4.2 is mostly technical and can be "choice of sigma is left to the user."
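The closed-form DP-constrained minimizer referenced in this entry comes from the Wasserstein-barycenter line of work on fair regression: each group's score distribution is transported to the common barycenter by mapping a score to the weighted average of the group-wise quantile functions evaluated at its within-group rank. A plug-in empirical sketch (function name is mine; ties and smoothing are ignored):

```python
import numpy as np

def dp_post_process(scores, groups, group_probs=None):
    """Hedged empirical sketch of post-processing regression scores
    for Demographic Parity via the Wasserstein-barycenter closed
    form: score v in group g maps to
        sum_h  P(S=h) * Q_h( F_g(v) ),
    where F_g is group g's empirical CDF and Q_h is group h's
    empirical quantile function.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    uniq = np.unique(groups)
    if group_probs is None:
        group_probs = {g: float(np.mean(groups == g)) for g in uniq}
    sorted_by_group = {g: np.sort(scores[groups == g]) for g in uniq}
    out = np.empty_like(scores)
    for i, (v, g) in enumerate(zip(scores, groups)):
        grp = sorted_by_group[g]
        # empirical CDF rank of v within its own group
        u = np.searchsorted(grp, v, side="right") / len(grp)
        # weighted average of group quantile functions at level u
        out[i] = sum(group_probs[h] * np.quantile(sorted_by_group[h], min(u, 1.0))
                     for h in uniq)
    return out
```

On two equally sized groups whose scores differ by a constant shift, the transformed score distributions coincide exactly across groups, which is the DP property the closed form is designed to enforce.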


Fair Ranking with Noisy Protected Attributes

Neural Information Processing Systems

A reason is that relevance (or utilities) input to ranking algorithms may be influenced by human or societal biases, leading to output rankings that skew representations of socially-salient, and often legally-protected, groups such as women and Black people [55].




Multi-agent Multi-armed Bandits with Minimum Reward Guarantee Fairness

Manupriya, Piyushi, Himanshu, Jagarlapudi, SakethaNath, Ghalme, Ganesh

arXiv.org Artificial Intelligence

We investigate the problem of maximizing social welfare while ensuring fairness in a multi-agent multi-armed bandit (MA-MAB) setting. In this problem, a centralized decision-maker takes actions over time, generating random rewards for various agents. Our goal is to maximize the sum of expected cumulative rewards, a.k.a. social welfare, while ensuring that each agent receives an expected reward that is at least a constant fraction of the maximum possible expected reward. Our proposed algorithm, RewardFairUCB, leverages the Upper Confidence Bound (UCB) technique to achieve sublinear regret bounds for both fairness and social welfare. The fairness regret measures the positive difference between the minimum reward guarantee and the expected reward of a given policy, whereas the social welfare regret measures the difference between the social welfare of the optimal fair policy and that of the given policy. We show that the RewardFairUCB algorithm achieves instance-independent social welfare regret guarantees of $\tilde{O}(T^{1/2})$ and a fairness regret upper bound of $\tilde{O}(T^{3/4})$. We also give a lower bound of $\Omega(\sqrt{T})$ for both social welfare and fairness regret. We evaluate RewardFairUCB's performance against various baseline and heuristic algorithms using simulated and real-world data, highlighting trade-offs between fairness and social welfare regrets.
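The interplay between a welfare-maximizing UCB rule and a minimum-reward guarantee can be sketched as follows. This is an assumption-heavy toy, not RewardFairUCB itself: each round we either play the arm with the highest welfare UCB (sum over agents), or, if the currently worst-off agent's cumulative reward has fallen below an alpha-scaled share of its best arm's UCB, play that agent's best arm instead (all thresholds and names are mine):

```python
import numpy as np

def reward_fair_ucb_sketch(reward_fn, n_arms, n_agents, horizon,
                           alpha=0.2, rng=None):
    """Toy UCB rule with a minimum-reward check, inspired by (but NOT
    identical to) RewardFairUCB.  reward_fn(arm, rng) returns one
    reward per agent for the pulled arm.
    """
    rng = np.random.default_rng(rng)
    counts = np.zeros(n_arms)
    means = np.zeros((n_arms, n_agents))   # per-(arm, agent) estimates
    earned = np.zeros(n_agents)            # cumulative reward per agent
    for t in range(1, horizon + 1):
        if t <= n_arms:                    # play every arm once
            arm = t - 1
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)[:, None]
            ucb = means + bonus            # per-(arm, agent) UCB
            worst = int(np.argmin(earned)) # most disadvantaged agent
            if earned[worst] < alpha * ucb[:, worst].max() * t / n_arms:
                arm = int(np.argmax(ucb[:, worst]))   # protect worst agent
            else:
                arm = int(np.argmax(ucb.sum(axis=1)))  # maximize welfare UCB
        r = np.asarray(reward_fn(arm, rng), dtype=float)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        earned += r
    return earned
```

On a two-arm, two-agent instance where each arm deterministically rewards exactly one agent, the fairness check keeps both agents' cumulative rewards from collapsing to zero.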